Recovering Jointly Sparse Signals via Joint Basis Pursuit
This work considers recovery of signals that are sparse over two bases. For
instance, a signal might be sparse in both time and frequency, or a matrix can
be low rank and sparse simultaneously. To facilitate recovery, we consider
minimizing the sum of the $\ell_1$-norms that correspond to each basis, which
is a tractable convex approach. We find novel optimality conditions which
indicate a gain over traditional approaches where minimization is
done over only one basis. Next, we analyze these optimality conditions for the
particular case of time-frequency bases. We show that, for a general class of
signals, this joint approach succeeds with far fewer measurements for
successful recovery than single-basis $\ell_1$ minimization classically
requires. Extensive simulations show that our analysis is approximately tight.
Comment: 8 pages, 1 figure, submitted to ISIT 2012
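As background for the convex program the abstract builds on, the sketch below solves single-basis basis pursuit ($\min \|x\|_1$ subject to $Ax = b$) via its standard linear-programming reformulation; the paper's joint variant adds a second $\ell_1$ term for the second basis. The function name and problem sizes are illustrative, not from the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Basis pursuit in a single basis: min ||x||_1 subject to A x = b.
# Standard LP reformulation: write x = u - v with u, v >= 0 and
# minimize 1'u + 1'v subject to A(u - v) = b.
def basis_pursuit(A, b):
    m, n = A.shape
    c = np.ones(2 * n)                       # objective: sum(u) + sum(v)
    A_eq = np.hstack([A, -A])                # A(u - v) = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n))
    u, v = res.x[:n], res.x[n:]
    return u - v

rng = np.random.default_rng(0)
n, m, k = 40, 20, 3                          # ambient dim, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n))
x_hat = basis_pursuit(A, A @ x_true)
```

Since the true sparse vector is feasible for the program, the solver's objective can never exceed $\|x_{\mathrm{true}}\|_1$, which gives a simple sanity check independent of whether recovery is exact.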
New Null Space Results and Recovery Thresholds for Matrix Rank Minimization
Nuclear norm minimization (NNM) has recently gained significant attention for
its use in rank minimization problems. Similar to compressed sensing, using
null space characterizations, recovery thresholds for NNM have been studied in
\cite{arxiv,Recht_Xu_Hassibi}. However, simulations show that the thresholds are
far from optimal, especially in the low rank region. In this paper we apply the
recent analysis of Stojnic for compressed sensing \cite{mihailo} to the null
space conditions of NNM. The resulting thresholds are significantly better and
in particular our weak threshold appears to match with simulation results.
Further, our curves suggest that for any rank growing linearly with the matrix
size, we need only about three times the model complexity in measurements
(threefold oversampling) for weak recovery. Similar to \cite{arxiv}, we analyze
the conditions for weak, sectional and strong thresholds. Additionally, a
separate analysis is given for the special case of positive semidefinite
matrices. We conclude by discussing simulation
results and future research directions.
Comment: 28 pages, 2 figures
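The "three times oversampling (the model complexity)" claim can be made concrete: a rank-$r$ $n \times n$ matrix has $r(2n - r)$ degrees of freedom, the standard parameter count for low-rank matrices. The helper functions below are a minimal illustration of that arithmetic; their names are illustrative, not from the paper.

```python
# Degrees of freedom (model complexity) of a rank-r n-by-n matrix:
# r(2n - r) parameters. The abstract suggests weak recovery via nuclear
# norm minimization needs roughly 3x this many generic linear measurements.
def model_complexity(n, r):
    return r * (2 * n - r)

def suggested_measurements(n, r, oversampling=3):
    return oversampling * model_complexity(n, r)

# e.g. for n = 100, r = 5: 975 degrees of freedom, so about 2925
# measurements -- far fewer than the n^2 = 10000 matrix entries.
```
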
Simple Error Bounds for Regularized Noisy Linear Inverse Problems
Consider estimating a structured signal $x_0$ from linear, underdetermined and
noisy measurements $y = A x_0 + z$, via solving a variant of the lasso
algorithm: $\hat{x} = \arg\min_x \{ \|y - Ax\|_2 + \lambda f(x) \}$. Here, $f$
is a convex function aiming to promote the structure of $x_0$, say the
$\ell_1$-norm to promote sparsity or the nuclear norm to promote low-rankness.
We assume that the entries of $A$ are independent and normally distributed and
make no assumptions on the noise vector $z$, other than it being independent
of $A$. Under this generic setup, we derive a general, non-asymptotic and
rather tight upper bound on the $\ell_2$-norm of the estimation error
$\|\hat{x} - x_0\|_2$. Our bound is geometric in nature and obeys a simple
formula; the roles of $\lambda$, $f$ and $x_0$ are all captured by a single
summary parameter $D(\lambda \partial f(x_0))$, termed the Gaussian squared
distance to the scaled subdifferential. We connect our result to the
literature and verify its validity through simulations.
Comment: 6 pages, 2 figures
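The Gaussian squared distance to the scaled subdifferential is just an expectation, so it can be estimated numerically. For $f = \|\cdot\|_1$ the subdifferential has a simple per-coordinate form, giving the Monte Carlo sketch below; the function name and sample sizes are illustrative assumptions, not from the paper.

```python
import numpy as np

# Monte Carlo estimate of the Gaussian squared distance for f = l1-norm:
# the expected squared Euclidean distance of a standard Gaussian vector g
# to the scaled subdifferential lam * d||.||_1(x0).
# On the support of x0 the subdifferential coordinate is the single point
# lam * sign(x0_i); off the support it is the interval [-lam, lam].
def gaussian_sq_distance_l1(x0, lam, n_samples=20000, seed=0):
    rng = np.random.default_rng(seed)
    support = x0 != 0
    g = rng.standard_normal((n_samples, x0.size))
    d2_on = (g[:, support] - lam * np.sign(x0[support])) ** 2
    d2_off = np.maximum(np.abs(g[:, ~support]) - lam, 0.0) ** 2
    return (d2_on.sum(axis=1) + d2_off.sum(axis=1)).mean()
```

As a sanity check, with $\lambda = 0$ the scaled subdifferential collapses to the origin, so the quantity reduces to $\mathbb{E}\|g\|^2 = n$.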
Sharp Time--Data Tradeoffs for Linear Inverse Problems
In this paper we characterize sharp time-data tradeoffs for optimization
problems used for solving linear inverse problems. We focus on the minimization
of a least-squares objective subject to a constraint defined as the sub-level
set of a penalty function. We present a unified convergence analysis of the
gradient projection algorithm applied to such problems. We sharply characterize
the convergence rate associated with a wide variety of random measurement
ensembles in terms of the number of measurements and structural complexity of
the signal with respect to the chosen penalty function. The results apply to
both convex and nonconvex constraints, demonstrating that a linear convergence
rate is attainable even though the least squares objective is not strongly
convex in these settings. When specialized to Gaussian measurements our results
show that such linear convergence occurs when the number of measurements is
merely 4 times the minimal number required to recover the desired signal at all
(a.k.a. the phase transition). We also achieve a slower but geometric rate of
convergence precisely above the phase transition point. Extensive numerical
results suggest that the derived rates exactly match the empirical performance
…
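A minimal instance of the gradient projection scheme the abstract analyzes, in its nonconvex form: minimize a least-squares objective over the set of $k$-sparse vectors, whose Euclidean projection is hard thresholding (keep the $k$ largest-magnitude entries). This is an illustrative sketch under assumed problem sizes, not the paper's exact setup; with step size $1/\|A\|_2^2$ the least-squares objective is non-increasing across iterations.

```python
import numpy as np

# Euclidean projection onto the set of (at most) k-sparse vectors:
# keep the k largest-magnitude entries, zero the rest.
def hard_threshold(x, k):
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

# Gradient projection for min ||Ax - b||^2 over k-sparse vectors.
def projected_gradient(A, b, k, n_iters=200):
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L for the smooth part
    x = np.zeros(A.shape[1])
    objs = []
    for _ in range(n_iters):
        x = hard_threshold(x - step * A.T @ (A @ x - b), k)
        objs.append(np.linalg.norm(A @ x - b) ** 2)
    return x, objs

rng = np.random.default_rng(1)
n, m, k = 60, 30, 3
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_hat, objs = projected_gradient(A, A @ x_true, k)
```

The monotone decrease follows from the standard majorization argument: each iterate minimizes a quadratic upper bound on the objective over the (nonconvex) constraint set.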